perm filename ASPRAY.1[LET,JMC]1 blob sn#544099 filedate 1980-11-17 generic text, type C, neo UTF8
.require "let.pub[let,jmc]" source;
∂CSL Dr. William Aspray↓Department of Philosophy↓Williams College
↓xxx∞

Dear Dr. Aspray:

	Many thanks for the copy of your Toronto talk.  Also Paul
Armer sent me a copy of your thesis draft.  I am
pleased that your detailed investigation mostly confirms my somewhat
idle speculation about the difference in point of view between
Turing and von Neumann.  It would be nice to have more evidence than
you have been able to get so far.  Have you talked to Church?
I talked to Stanislaw Ulam, who was a colleague of von Neumann and
who also collaborated on the first computer chess program.  When I asked him
about von Neumann's attitude to the chess program, he said that
he never mentioned the program to von Neumann, which I suppose
indicates his opinion at the time about von Neumann's probable
interest.

	Here are some opinions on the outcome of the approaches of
the two men.

	1. Turing's idea of using standard computers rather than
designing special purpose machines has won.  Because many of the
people who spontaneously began working on artificial intelligence
before the late 1950s didn't really know about computers and
programming, this took until about 1960 to become the dominant
way of doing artificial intelligence.  I became interested in
artificial intelligence in 1949 while a graduate student but
didn't base my ideas on computers till I spent the summer of
1955 in the IBM research laboratory in Poughkeepsie.

	2. Turing's idea of starting with the child machine
has been re-invented many times but still hasn't paid off.  The
problem is that no one has succeeded in inventing a mechanism
that can start where a child starts and be teachable as a child
is taught.  Reprogramming a computer is like doing education by
brain surgery; it requires a detailed understanding of the
existing mechanisms.  Since 1958 I have been working on the
epistemological problems of AI, and this can be considered to
be trying to reach the intellectual level from which Turing's child machine
must start.

	3. The actual success in AI has been accomplished by
identifying and embodying in computer programs detailed information
and mechanisms of adult thought - even expert thought.  Thus the
MYCIN program can't even start at the intellectual level of a medical
student and be told about bacterial blood diseases.  It must be
given expert information, but it still has no common sense.  It
doesn't really know about doctors, patients, hospitals and diseases.
It doesn't even know about time, causality, goals and achievement.

	4. It remains my opinion that the key to AI is the formalization
of common sense.

	5. Von Neumann's "general theory of automata" amounts really
to a few isolated ideas and had little subsequent influence.
The work on "reliable computers from unreliable elements" gives a way of
making computers more reliable than any individual component.
However, so far there has been no need for that.  Present components
have MTBFs of thousands of years, except that there are probably
systematic processes that would shorten their lives.  The problem
has been that very large assemblages of components have much shorter
MTBFs, but the methods of using redundancy to deal with the phenomenon
have been special and have been enormously less expensive than those
described by von Neumann.

	The self-reproducing automata ideas have not been much
pursued either theoretically or practically.  In one sense, the
fundamental theoretical idea for self-reproduction is given by
the Y-combinator of Church and Curry, namely
λf.[(λx.f(x(x)))(λx.f(x(x)))], which uses the function %2f%1 and
also keeps another copy for subsequent iterations.  Attempts to
simplify the von Neumann ideas don't know where to stop, short of
the Y-combinator.
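The fixed-point idea can be sketched in a modern language (this example is mine, not part of the letter). Since Python evaluates arguments eagerly, the normal-order Y-combinator λf.[(λx.f(x(x)))(λx.f(x(x)))] would loop forever; the sketch below uses the eta-expanded applicative-order variant (the Z-combinator), which behaves the same way for functions:

```python
# Applicative-order fixed-point combinator:
# Z = λf.(λx.f(λv.x(x)(v)))(λx.f(λv.x(x)(v)))
# The eta-expansion λv.x(x)(v) delays evaluation so Python does not recurse
# forever before f is ever applied.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# f receives a copy of itself ("keeps another copy for subsequent
# iterations"), letting us define factorial with no named recursion.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

print(fact(5))  # → 120
```

The self-application x(x) is what hands each iteration a fresh copy of the whole mechanism, which is the sense in which the combinator captures self-reproduction.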

	I also have some detailed comments on your thesis draft.

.sgn